Implementation of Ophthalmic Imaging Based on Hadoop

Authors

  • Xuguang Zhao
  • Shudong Zhang
  • Zhongshan Ren
Abstract

With the rapid development of the ophthalmic treatment sector, real-time access to vast amounts of patient treatment data and image information plays a particularly important role in medical image storage and management. Cloud platforms for distributed storage based on Hadoop are becoming increasingly popular in industry, but Hadoop is designed for storing large files, while ophthalmic images and other medical images are generally around 500 KB in size, or even smaller. Storing them directly in Hadoop therefore creates a bottleneck. To address this, the many small images are merged, as far as possible, through a sequence of file operations, and the merged file is stored on a DataNode, reducing the number of files kept in HDFS. A batch of small files is not transferred to the HDFS data nodes immediately after combination; it is placed on a cache server, where it waits to be serialized into a sequence file, and is finally uploaded to HDFS, so that a large volume of ophthalmic images can be stored effectively on the Hadoop platform.
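The merge-then-upload idea in the abstract can be illustrated with a minimal sketch: small files are buffered in a cache, and once the cache fills, they are serialized into one length-prefixed container of filename/bytes records, analogous to the key/value records of a Hadoop SequenceFile. The class names, thresholds, and record format here are illustrative assumptions, not the paper's actual implementation.

```python
import io
import struct

class SmallFileMerger:
    """Hypothetical sketch: buffer small files, then serialize each full
    batch into one large container blob (stand-in for a SequenceFile)."""

    def __init__(self, flush_threshold=3):
        self.cache = []             # buffered (name, data) pairs, not yet in HDFS
        self.flush_threshold = flush_threshold
        self.merged_blobs = []      # serialized containers, ready for upload

    def put(self, name, data):
        """Buffer a small file; flush the cache once it is full."""
        self.cache.append((name, data))
        if len(self.cache) >= self.flush_threshold:
            self.flush()

    def flush(self):
        """Serialize all cached files into one length-prefixed container."""
        if not self.cache:
            return
        buf = io.BytesIO()
        for name, data in self.cache:
            key = name.encode("utf-8")
            # Record layout: 4-byte key length, 4-byte value length, key, value.
            buf.write(struct.pack(">II", len(key), len(data)))
            buf.write(key)
            buf.write(data)
        self.merged_blobs.append(buf.getvalue())
        self.cache = []

def read_container(blob):
    """Recover the original (name, data) records from a merged container."""
    records, offset = [], 0
    while offset < len(blob):
        klen, vlen = struct.unpack_from(">II", blob, offset)
        offset += 8
        key = blob[offset:offset + klen].decode("utf-8")
        offset += klen
        records.append((key, blob[offset:offset + vlen]))
        offset += vlen
    return records
```

In a real deployment, the flush step would write a Hadoop SequenceFile via its Java API and upload it to HDFS; this sketch only captures the batching-and-serialization flow the abstract describes.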


Similar resources

Adaptive Dynamic Data Placement Algorithm for Hadoop in Heterogeneous Environments

The Hadoop MapReduce framework is an important distributed processing model for large-scale data-intensive applications. The current Hadoop and the existing Hadoop distributed file system's rack-aware data placement strategy for MapReduce in a homogeneous Hadoop cluster assume that each node in the cluster has the same computing capacity and that the same workload is assigned to each node. Default Hadoop d...
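The heterogeneity problem this teaser raises can be sketched simply: instead of the default uniform split, assign each node a share of blocks proportional to its measured computing capacity. The node names and capacity ratios below are illustrative assumptions, not the paper's algorithm.

```python
def place_blocks(num_blocks, capacities):
    """Assign block counts to nodes proportional to their capacity.

    capacities: mapping of node name -> relative computing capacity.
    Returns a mapping of node name -> number of blocks to place there.
    """
    total = sum(capacities.values())
    placement = {node: int(num_blocks * cap / total)
                 for node, cap in capacities.items()}
    # Hand out any remainder from integer rounding, fastest nodes first.
    remainder = num_blocks - sum(placement.values())
    for node in sorted(capacities, key=capacities.get, reverse=True):
        if remainder == 0:
            break
        placement[node] += 1
        remainder -= 1
    return placement
```

For example, with a node twice as fast as another, it receives roughly twice the blocks, so both finish their map work at about the same time.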


Design and Implementation of K-Means and Hierarchical Document Clustering on Hadoop

Document clustering is one of the important areas in data mining. Hadoop is used by companies such as Yahoo, Google, Facebook, and Twitter to implement real-time applications. Emails, social media blog posts, movie review comments, and books are used for document clustering. This paper focuses on document clustering using Hadoop. Hadoop is the new technology used for parallel computing ...
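The k-means half of that paper can be illustrated with a minimal, non-distributed sketch on toy term-frequency vectors; the vectors, k, and iteration count are illustrative assumptions, not the paper's Hadoop implementation (which would distribute the assignment step as a MapReduce job).

```python
def kmeans(points, centroids, iterations=10):
    """Plain k-means: assign each point to its nearest centroid (by squared
    Euclidean distance), then recompute each centroid as its cluster mean."""
    for _ in range(iterations):
        clusters = {i: [] for i in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(members) for dim in zip(*members))
            if members else centroids[i]          # keep an empty cluster's centroid
            for i, members in clusters.items()
        ]
    return centroids, clusters
```

In the MapReduce formulation, the assignment loop becomes the map phase (emit nearest-centroid id per document vector) and the centroid recomputation becomes the reduce phase, with one MapReduce job per iteration.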


Optimizing data management for MapReduce applications on large-scale distributed infrastructures

...ions were developed based on MapReduce, with the goal of providing a simple-to-use interface for expressing database-like queries [64, 6]. Bioinformatics is one of the numerous research domains that employ MapReduce to model their algorithms [69, 58, 56]. As an example, CloudBurst [69] is a MapReduce-based algorithm for mapping next-generation sequence data to the human genome and other referenc...


Intel 10GBASE-T in TACC Dynamic Hadoop Environment

Due to advances in 10GBASE-T technology, faster interconnects are now more affordable. This gives organizations the opportunity to deploy a cost-effective, high-performance Hadoop cluster based on 10 GbE interconnects. As part of this paper, we have partnered with the Texas Advanced Computing Center (TACC) to examine how their unique implementation of Hadoop can benefit from Intel's cutting...


MapReduce Frameworks: Comparing Hadoop and HPCC

MapReduce and Hadoop are often used synonymously. For optimal runtime performance, Hadoop users have to consider various implementation details and configuration parameters. When conducting performance experiments with Hadoop on different algorithms, it is hard to choose a set of such implementation optimizations and configuration options which is fair to all algorithms. By fair we mean default...



Journal title:

Volume   Issue

Pages  -

Publication date: 2015